
    Finding a low-rank basis in a matrix subspace

    For a given matrix subspace, how can we find a basis that consists of low-rank matrices? This is a generalization of the sparse vector problem. It turns out that when the subspace is spanned by rank-1 matrices, such a basis can be obtained via the tensor CP decomposition. For the higher-rank case, the situation is not as straightforward. In this work we present an algorithm based on a greedy process applicable to higher-rank problems. Our algorithm first estimates the minimum rank by applying soft singular value thresholding to a nuclear norm relaxation, and then computes a matrix with that rank using the method of alternating projections. We provide local convergence results and compare our algorithm with several alternative approaches. Applications include data compression beyond the classical truncated SVD, computing accurate eigenvectors of a near-multiple eigenvalue, image separation, and graph Laplacian eigenproblems.
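    Below is a minimal numpy sketch of the alternating-projections phase described in the abstract, assuming the target rank r has already been estimated (e.g., by the soft singular value thresholding step). The function name, the QR-based subspace projection, and the stopping rule are our own illustration, not the authors' code.

    ```python
    import numpy as np

    def alternating_projections(basis, X, r, iters=200, tol=1e-10):
        """Alternate between the set of rank-r matrices (via truncated SVD)
        and the matrix subspace spanned by `basis` (a list of matrices)."""
        # Orthonormalize the vectorized basis for a cheap orthogonal projection.
        B = np.stack([M.ravel() for M in basis])   # rows span the subspace
        Q, _ = np.linalg.qr(B.T)                   # columns: orthonormal basis
        for _ in range(iters):
            # Project onto rank-r matrices: keep the r largest singular triplets.
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            Y = (U[:, :r] * s[:r]) @ Vt[:r]
            # Project back onto the subspace.
            X_new = (Q @ (Q.T @ Y.ravel())).reshape(X.shape)
            if np.linalg.norm(X_new - X) < tol:
                break
            X = X_new
        return X
    ```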

    On orthogonal tensors and best rank-one approximation ratio

    As is well known, the smallest possible ratio between the spectral norm and the Frobenius norm of an $m \times n$ matrix with $m \le n$ is $1/\sqrt{m}$ and is (up to scalar scaling) attained only by matrices having pairwise orthonormal rows. In the present paper, the smallest possible ratio between spectral and Frobenius norms of $n_1 \times \dots \times n_d$ tensors of order $d$, also called the best rank-one approximation ratio in the literature, is investigated. The exact value is not known for most configurations of $n_1 \le \dots \le n_d$. Using a natural definition of orthogonal tensors over the real field (resp., unitary tensors over the complex field), it is shown that the obvious lower bound $1/\sqrt{n_1 \cdots n_{d-1}}$ is attained if and only if a tensor is orthogonal (resp., unitary) up to scaling. Whether or not orthogonal or unitary tensors exist depends on the dimensions $n_1, \dots, n_d$ and the field. A connection between the (non)existence of real orthogonal tensors of order three and the classical Hurwitz problem on composition algebras can be established: existence of orthogonal tensors of size $\ell \times m \times n$ is equivalent to the admissibility of the triple $[\ell, m, n]$ to the Hurwitz problem. Some implications for higher-order tensors are then given. For instance, real orthogonal $n \times \dots \times n$ tensors of order $d \ge 3$ do exist, but only when $n = 1, 2, 4, 8$. In the complex case, the situation is more drastic: unitary tensors of size $\ell \times m \times n$ with $\ell \le m \le n$ exist only when $\ell m \le n$. Finally, some numerical illustrations for spectral norm computation are presented.
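    The abstract ends with numerical illustrations for spectral norm computation; a standard way to approximate the spectral norm of a tensor is alternating (higher-order power) iteration. The numpy sketch below is that generic method for order three, not necessarily the paper's procedure. The $2 \times 2 \times 2$ test tensor encodes complex multiplication, an orthogonal tensor, so the spectral-to-Frobenius ratio should approach the lower bound $1/\sqrt{2 \cdot 2} = 1/2$.

    ```python
    import numpy as np

    def tensor_spectral_norm(T, iters=500, seed=0):
        """Approximate max |T(x, y, z)| over unit vectors by alternating
        updates of x, y, z; converges to a critical point in general."""
        rng = np.random.default_rng(seed)
        _, n2, n3 = T.shape
        y = rng.standard_normal(n2); y /= np.linalg.norm(y)
        z = rng.standard_normal(n3); z /= np.linalg.norm(z)
        for _ in range(iters):
            x = np.einsum('ijk,j,k->i', T, y, z); x /= np.linalg.norm(x)
            y = np.einsum('ijk,i,k->j', T, x, z); y /= np.linalg.norm(y)
            z = np.einsum('ijk,i,j->k', T, x, y); z /= np.linalg.norm(z)
        return abs(np.einsum('ijk,i,j,k->', T, x, y, z))

    # 2x2x2 multiplication tensor of the complex numbers (an orthogonal tensor).
    T = np.array([[[1., 0.], [0., 1.]],
                  [[0., 1.], [-1., 0.]]])
    print(tensor_spectral_norm(T) / np.linalg.norm(T))  # ~0.5
    ```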

    A New Approximation Guarantee for Monotone Submodular Function Maximization via Discrete Convexity

    In monotone submodular function maximization, approximation guarantees based on the curvature of the objective function have been extensively studied in the literature. However, the notion of curvature is often pessimistic, and we rarely obtain improved approximation guarantees, even for very simple objective functions. In this paper, we provide a novel approximation guarantee by extracting an $M^\natural$-concave function $h \colon 2^E \to \mathbb{R}_+$, a notion in discrete convex analysis, from the objective function $f \colon 2^E \to \mathbb{R}_+$. We introduce a notion called the $M^\natural$-concave curvature of a given set function $f$, which measures how much $f$ deviates from an $M^\natural$-concave function, and show that we can obtain a $(1 - \gamma/e - \epsilon)$-approximation to the problem of maximizing $f$ under a cardinality constraint in polynomial time, where $\gamma$ is the value of the $M^\natural$-concave curvature and $\epsilon > 0$ is an arbitrary constant. Then, we show that we can obtain nontrivial approximation guarantees for various problems by applying the proposed algorithm.
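    For context, the classical greedy algorithm of Nemhauser, Wolsey, and Fisher achieves a $(1 - 1/e)$-approximation for this problem; the paper's $(1 - \gamma/e - \epsilon)$ bound improves on it as the $M^\natural$-concave curvature $\gamma$ shrinks. Below is a minimal Python sketch of that greedy baseline with an invented coverage function as the example objective; it is not the authors' proposed algorithm.

    ```python
    def greedy_max(f, E, k):
        """Greedy for max f(S) s.t. |S| <= k, with f monotone submodular:
        repeatedly add the element of largest marginal gain."""
        S = set()
        for _ in range(k):
            e = max((x for x in E if x not in S),
                    key=lambda x: f(S | {x}) - f(S))
            S.add(e)
        return S

    # Example objective: set coverage, a monotone submodular function.
    sets = {1: {'a', 'b'}, 2: {'b', 'c'}, 3: {'c', 'd', 'e'}}
    f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
    print(greedy_max(f, sets.keys(), 2))  # {1, 3}, covering 5 elements
    ```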

    Optimal algorithms for group distributionally robust optimization and beyond

    Distributionally robust optimization (DRO) can improve the robustness and fairness of learning methods. In this paper, we devise stochastic algorithms for a class of DRO problems including group DRO, subpopulation fairness, and empirical conditional value at risk (CVaR) optimization. Our new algorithms achieve faster convergence rates than existing algorithms for multiple DRO settings. We also provide a new information-theoretic lower bound that implies our bounds are tight for group DRO. Empirically, too, our algorithms outperform known methods.
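    A standard primal-dual baseline for group DRO, $\min_\theta \max_{q \in \Delta} \sum_g q_g L_g(\theta)$, pairs SGD on the model parameters with exponentiated-gradient ascent on the group weights. The sketch below shows one such update under an assumed user-supplied `loss_and_grad`; it is a generic baseline for the problem class, not the paper's optimal algorithm.

    ```python
    import numpy as np

    def group_dro_step(theta, q, group_batches, loss_and_grad,
                       lr_theta=0.1, lr_q=0.1):
        """One primal-dual update for min_theta max_{q in simplex} sum_g q_g L_g.
        `loss_and_grad(theta, batch)` returns (loss, gradient) for one group."""
        losses, grads = zip(*(loss_and_grad(theta, b) for b in group_batches))
        # Dual step: exponentiated gradient upweights the worst-off groups.
        q = q * np.exp(lr_q * np.array(losses))
        q /= q.sum()
        # Primal step: SGD on the q-weighted loss.
        theta = theta - lr_theta * sum(w * g for w, g in zip(q, grads))
        return theta, q
    ```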

    Algebraic combinatorial optimization on the degree of determinants of noncommutative symbolic matrices

    We address the computation of the degrees of minors of a noncommutative symbolic matrix of the form $A[c] := \sum_{k=1}^m A_k t^{c_k} x_k$, where the $A_k$ are matrices over a field $\mathbb{K}$, the $x_k$ are noncommutative variables, the $c_k$ are integer weights, and $t$ is a commuting variable specifying the degree. This problem extends the noncommutative Edmonds' problem (Ivanyos et al. 2017) and can formulate various combinatorial optimization problems. Extending the studies of Hirai (2018) and Hirai and Ikeda (2022), we provide novel duality theorems and a polyhedral characterization for the maximum degrees of minors of $A[c]$ of all sizes, and develop a strongly polynomial-time algorithm for computing them. This algorithm can be viewed as a unified algebraization of the classical Hungarian method for bipartite matching and the weight-splitting algorithm for linear matroid intersection. As applications, we provide polynomial-time algorithms for weighted fractional linear matroid matching and for linear optimization over rank-2 Brascamp-Lieb polytopes.
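    To illustrate the classical special case that the algorithm algebraizes: if each $A_k$ is an elementary matrix $E_{i_k j_k}$ at a distinct position, then $\det A[c]$ expands into distinct noncommutative monomials that cannot cancel, so $\deg \det A[c]$ equals the maximum weight of a perfect matching on the weight matrix $(c_k)$, exactly the problem solved by the Hungarian method. A small scipy sketch of that special case, with an invented weight matrix `C`:

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Bipartite special case: entry (i, j) of A[c] is t^{c_ij} x_ij, so
    # deg det A[c] = max-weight perfect matching over the weights c_ij.
    C = np.array([[3, 1, 0],
                  [2, 4, 1],
                  [0, 2, 5]])            # c_ij: degree weight at position (i, j)
    rows, cols = linear_sum_assignment(C, maximize=True)
    print(C[rows, cols].sum())           # 12 = deg det A[c] in this special case
    ```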

    Submodular and Sparse Optimization Methods for Machine Learning and Communication (機械学習と通信のための劣モジュラ・スパース最適化手法)

    Degree type: Doctorate by coursework. Dissertation committee: (Chair) Prof. Satoru Iwata (The University of Tokyo); Prof. Kunihiko Sadakane (The University of Tokyo); Prof. Hirosuke Yamamoto (The University of Tokyo); Assoc. Prof. Akiko Takeda (The University of Tokyo); Assoc. Prof. Hiroshi Hirai (The University of Tokyo). The University of Tokyo (東京大学).